It has been shown in the past that a multi-start hill climbing strategy compares favourably to a standard genetic algorithm with respect to solving instances of the multimodal problem generator. We extend that work and verify whether the use of diversity-preservation techniques in the genetic algorithm changes the outcome of the comparison. We do so under two scenarios: (1) when the goal is to find the global optimum, and (2) when the goal is to find all optima. A mathematical analysis is performed for the multi-start hill climbing algorithm, and an empirical study is conducted on solving instances of the multimodal problem generator with an increasing number of optima, both with the hill climbing strategy and with genetic algorithms using niching. Although niching improves the performance of the genetic algorithm, it is still inferior to the multi-start hill climbing strategy on this class of problems. An idealized niching strategy is also presented, and it is argued that its performance should be close to the best any evolutionary algorithm can achieve on this class of problems.
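For illustration, a minimal Python sketch of the multi-start (random-restart) hill climbing strategy on a toy multimodal bit-string landscape; the fitness function, string length, and restart budget below are illustrative assumptions, not the paper's experimental setup.

```python
import random

def hill_climb(fitness, n_bits, rng):
    """Single hill climbing run: flip one bit at a time, accept improvements."""
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    improved = True
    while improved:
        improved = False
        for i in range(n_bits):
            y = x.copy()
            y[i] ^= 1  # flip bit i
            if fitness(y) > fitness(x):
                x, improved = y, True
    return x

def multi_start_hill_climb(fitness, n_bits, restarts=100, seed=0):
    """Multi-start strategy: restart from a fresh random point, keep the best local optimum."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        cand = hill_climb(fitness, n_bits, rng)
        if best is None or fitness(cand) > fitness(best):
            best = cand
    return best

def make_multimodal(n_bits, n_peaks, seed=1):
    """Toy multimodal fitness (illustrative only): agreement with the nearest random peak."""
    rng = random.Random(seed)
    peaks = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_peaks)]
    return lambda x: max(sum(a == b for a, b in zip(x, p)) for p in peaks)

if __name__ == "__main__":
    f = make_multimodal(n_bits=32, n_peaks=8)
    print(f(multi_start_hill_climb(f, n_bits=32)))
```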
Early recognition of clinical deterioration (CD) has vital importance in patients' survival from exacerbation or death. Electronic health records (EHRs) data have been widely employed in Early Warning Scores (EWS) to measure CD risk in hospitalized patients. Recently, EHRs data have been utilized in Machine Learning (ML) models to predict mortality and CD. The ML models have shown superior performance in CD prediction compared to EWS. Since EHRs data are structured and tabular, conventional ML models are generally applied to them, and less effort is put into evaluating the performance of artificial neural networks on EHRs data. Thus, in this article, an extremely boosted neural network (XBNet) is used to predict CD, and its performance is compared to the eXtreme Gradient Boosting (XGBoost) and random forest (RF) models. For this purpose, 103,105 samples from thirteen Brazilian hospitals are used to generate the models. Moreover, principal component analysis (PCA) is employed to verify whether it can improve the adopted models' performance. The performance of the ML models and of the Modified Early Warning Score (MEWS), an EWS candidate, is evaluated in CD prediction regarding the accuracy, precision, recall, F1-score, and geometric mean (G-mean) metrics in a 10-fold cross-validation approach. According to the experiments, the XGBoost model obtained the best results in predicting CD among the Brazilian hospitals' data.
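A hedged sketch of the kind of evaluation described above: 10-fold cross-validation of an XGBoost classifier scored with accuracy, precision, recall, F1-score, and G-mean. The hyperparameters and the synthetic feature matrix are placeholders; the EHR preprocessing used on the Brazilian hospitals' data is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix
from xgboost import XGBClassifier

def evaluate_cd_model(X, y, n_splits=10, seed=0):
    """10-fold cross-validation of an XGBoost classifier for clinical deterioration (CD)."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = XGBClassifier(n_estimators=200, max_depth=4)
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        scores.append({
            "accuracy": accuracy_score(y[test_idx], pred),
            "precision": precision_score(y[test_idx], pred),
            "recall": recall_score(y[test_idx], pred),
            "f1": f1_score(y[test_idx], pred),
            "g_mean": np.sqrt(sensitivity * specificity),  # geometric mean of sensitivity and specificity
        })
    return {k: np.mean([s[k] for s in scores]) for k in scores[0]}

if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
    print(evaluate_cd_model(X, y))
```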
This work proposes a kernel selection method for probabilistic classifiers based on the features produced by the convolutional encoder of a variational autoencoder. In particular, the developed methodology allows the selection of the most relevant subset of latent variables. In the proposed implementation, each latent variable is sampled from the distribution associated with a single kernel of the encoder's last convolutional layer, since an individual distribution is created for each kernel. Hence, relevant feature selection on the sampled latent variables makes it possible to perform kernel selection, filtering out non-informative features and kernels. This leads to a reduction in the number of model parameters. Both wrapper and filter methods were evaluated for feature selection. The second is particularly relevant, since it relies only on the distributions of the kernels. It was evaluated by measuring the Kullback-Leibler divergence between all distributions, under the hypothesis that kernels whose distributions are more similar can be discarded. This hypothesis was confirmed, since it was observed that the most similar kernels do not convey relevant information and can be removed. As a result, the proposed methodology is suitable for developing applications for resource-constrained devices.
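A small numerical sketch of the filter idea: model each kernel's latent variable as a univariate Gaussian, compute pairwise symmetrised Kullback-Leibler divergences, and discard the kernels whose distributions are most similar to the rest. The scoring rule and the keep-the-most-distinctive criterion are an assumed reading of the filter method, not the paper's exact procedure.

```python
import numpy as np

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    """KL divergence between two univariate Gaussians: N(mu1, sigma1^2) || N(mu2, sigma2^2)."""
    return np.log(sigma2 / sigma1) + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2) - 0.5

def filter_kernels(mus, sigmas, n_keep):
    """Rank kernels by how dissimilar their latent distribution is from all others
    (symmetrised KL) and keep the n_keep most distinctive ones."""
    n = len(mus)
    dissimilarity = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                dissimilarity[i] += (kl_gaussian(mus[i], sigmas[i], mus[j], sigmas[j])
                                     + kl_gaussian(mus[j], sigmas[j], mus[i], sigmas[i]))
    return np.argsort(dissimilarity)[::-1][:n_keep]  # indices of the most informative kernels

# Example: 16 kernels, each with a fitted latent Gaussian (mu, sigma); keep the 8 most distinctive.
rng = np.random.default_rng(0)
mus, sigmas = rng.normal(size=16), rng.uniform(0.5, 2.0, size=16)
print(filter_kernels(mus, sigmas, n_keep=8))
```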
This work proposes ProBoost, a new boosting algorithm for probabilistic classifiers. The algorithm uses the epistemic uncertainty of each training sample to determine the most challenging/uncertain samples. The relevance of these samples is then increased for the next weak learner, producing a sequence that progressively focuses on the samples found to have the highest uncertainty. Finally, the weak learners' outputs are combined into a weighted ensemble of classifiers. Three methods are proposed to manipulate the training set: undersampling, oversampling, and weighting the training samples according to the uncertainty estimated by the weak learners. Furthermore, two approaches regarding the ensemble combination are studied. The weak learner considered in this work is a standard convolutional neural network, and the probabilistic models used for uncertainty estimation rely on either variational inference or Monte Carlo dropout. The experimental evaluation carried out on the MNIST benchmark dataset shows that ProBoost yields a significant performance improvement. The results are further highlighted by assessing the relative achievable improvement, a metric proposed in this work, which shows that a model with only four weak learners leads to an improvement exceeding 12% in this metric (for accuracy, sensitivity, or specificity), compared to the model without ProBoost.
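A self-contained sketch of the boosting loop described above. To stay runnable without a deep-learning stack, the CNN weak learner and MC-dropout uncertainty are replaced by a shallow decision tree and the disagreement of a small bootstrap ensemble; only the structure (re-weighting samples by estimated uncertainty, then forming a weighted ensemble) is the point being illustrated.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def _weighted_bootstrap(X, y, weights, rng):
    """Resample the training set according to the current sample weights."""
    idx = rng.choice(len(y), size=len(y), replace=True, p=weights)
    return X[idx], y[idx]

def epistemic_uncertainty(models, X):
    """Proxy for epistemic uncertainty: variance of the positive-class probability
    across a small bootstrap ensemble (stand-in for MC-dropout passes of a CNN)."""
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models])
    return probs.var(axis=0)

def proboost_like(X, y, n_learners=4, seed=0):
    """Boosting loop: each round, samples with higher estimated uncertainty gain weight."""
    rng = np.random.default_rng(seed)
    weights = np.full(len(y), 1.0 / len(y))
    learners, alphas = [], []
    for _ in range(n_learners):
        clf = DecisionTreeClassifier(max_depth=3, random_state=seed).fit(X, y, sample_weight=weights)
        boot = [DecisionTreeClassifier(max_depth=3).fit(*_weighted_bootstrap(X, y, weights, rng))
                for _ in range(5)]
        u = epistemic_uncertainty(boot, X)
        weights = weights * (1.0 + u)          # increase relevance of uncertain samples
        weights /= weights.sum()
        learners.append(clf)
        alphas.append(clf.score(X, y))         # simple accuracy-based combination weight
    return learners, np.array(alphas) / np.sum(alphas)

def predict(learners, alphas, X):
    """Weighted ensemble of the weak learners' probabilistic outputs."""
    probs = sum(a * m.predict_proba(X) for a, m in zip(alphas, learners))
    return probs.argmax(axis=1)

X, y = make_classification(n_samples=500, random_state=0)
learners, alphas = proboost_like(X, y)
print(predict(learners, alphas, X[:10]))
```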
The Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project was developed to combine cosmology with astrophysics through thousands of cosmological hydrodynamic simulations and machine learning. CAMELS contains 4,233 cosmological simulations, 2,049 N-body and 2,184 state-of-the-art hydrodynamic simulations that sample a vast volume in parameter space. In this paper, we present the CAMELS public data release, describing the characteristics of the CAMELS simulations and the various data products generated from them, including halo, subhalo, galaxy, and void catalogues, power spectra, bispectra, Lyman-$\alpha$ spectra, probability distribution functions, halo radial profiles, and X-ray photon lists. We also release catalogues with billions of galaxies from CAMELS-SAM: a large set of N-body simulations combined with the Santa Cruz semi-analytic model. We release all the data, comprising more than 350 terabytes and containing 143,922 snapshots, millions of halos and galaxies, and summary statistics. We provide further technical details on how to access, download, read, and process the data at \url{https://camels.readthedocs.io}.
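As a minimal illustration of reading one of the released snapshots, a sketch using h5py; the file name and the Gadget-style group/dataset names below are assumptions and should be checked against the documentation at https://camels.readthedocs.io.

```python
import h5py  # CAMELS snapshots are distributed as HDF5 files

# Hypothetical local path; the actual file layout and download instructions are in the docs.
snapshot_path = "snap_033.hdf5"

with h5py.File(snapshot_path, "r") as f:
    redshift = f["Header"].attrs["Redshift"]   # snapshot redshift (assumed header attribute)
    boxsize  = f["Header"].attrs["BoxSize"]    # comoving box size (assumed header attribute)
    gas_pos  = f["PartType0/Coordinates"][:]   # gas particle positions (Gadget-style group name)
    gas_mass = f["PartType0/Masses"][:]        # gas particle masses

print(f"z = {redshift:.2f}, box = {boxsize}, gas particles = {len(gas_mass)}")
```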
Biomedical decision making involves the processing of multiple signals, coming from different sensors or from different channels. In both cases, information fusion plays an important role. In this work, deep-learning-based feature-level fusion of electroencephalogram (EEG) channels is carried out for the EEG cyclic alternating pattern (CAP). The channel selection, fusion, and classification procedures were optimized by two optimization algorithms, namely the genetic algorithm and particle swarm optimization. The developed methodology was evaluated by fusing information from multiple EEG channels for patients with nocturnal frontal lobe epilepsy and for patients without any neurological disorder, which is significantly more challenging than in other state-of-the-art works. The results show that both optimization algorithms selected a comparable structure with similar feature-level fusion, consisting of three EEG channels, which is in agreement with the CAP protocol of ensuring that multiple channels are used for CAP detection. Moreover, both optimized models reached an area under the receiver operating characteristic curve of 0.82, with average accuracies between 77% and 79%, results that are in the upper range of the specialists' agreement. Even though the dataset is a difficult one, the proposed approach is still in the upper range of the best state-of-the-art works, with the advantage of providing a fully automatic analysis without requiring any manual procedure. Ultimately, the models were shown to be noise-resistant and resilient to the loss of multiple channels.
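An illustrative sketch of genetic-algorithm-driven channel selection with feature-level fusion: individuals are binary channel masks, and the fitness is the cross-validated ROC AUC of a classifier trained on the concatenated features of the selected channels. The data, classifier, and GA settings are stand-ins, not the paper's configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Fitness of a channel subset: cross-validated ROC AUC on the fused (concatenated)
    features of the selected channels."""
    if mask.sum() == 0:
        return 0.0
    fused = X[:, mask.astype(bool), :].reshape(len(X), -1)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, fused, y, cv=5, scoring="roc_auc").mean()

def ga_channel_selection(X, y, n_channels, pop=20, generations=15, seed=0):
    """Toy genetic algorithm over binary channel masks (selection, crossover, mutation)."""
    rng = np.random.default_rng(seed)
    population = rng.integers(0, 2, size=(pop, n_channels))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]          # keep the best half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_channels)
            child = np.concatenate([a[:cut], b[cut:]])                # one-point crossover
            flip = rng.random(n_channels) < 0.05                      # mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[scores.argmax()]

# Synthetic stand-in: 200 epochs x 8 channels x 16 features per channel.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8, 16)); y = rng.integers(0, 2, size=200)
print("selected channels:", np.flatnonzero(ga_channel_selection(X, y, n_channels=8)))
```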
Identity theft is a major concern for credit lenders when there is not enough data to corroborate a customer's identity. In super-apps, large digital platforms that encompass many different services, this problem is even more relevant; losing a client in one branch often means losing them in other services. In this paper, we review the effectiveness of the feature-level fusion of super-app customer information, mobile phone line data, and traditional credit risk variables for the early detection of identity theft credit card fraud. With the proposed framework, we achieve better performance when the model's input is the fusion of alternative data and traditional credit bureau data, reaching a ROC AUC score of 0.81. We evaluate our approach over approximately 90,000 users from the digital platform database of a credit lender. The evaluation was carried out using not only traditional ML metrics but also the financial costs.
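A hedged sketch of the feature-level fusion setup: the three sources are concatenated into a single feature matrix before training, then evaluated with ROC AUC and a simple financial-cost view. All data, the classifier choice, and the cost figures are illustrative placeholders, not the paper's data or results.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic stand-ins for the three sources (the real features are not public):
rng = np.random.default_rng(0)
n = 5000
superapp  = rng.normal(size=(n, 10))   # super-app behavioural features
phoneline = rng.normal(size=(n, 5))    # mobile phone line features
bureau    = rng.normal(size=(n, 8))    # traditional credit bureau variables
y = rng.integers(0, 2, size=n)         # 1 = identity-theft fraud

# Feature-level fusion: simple concatenation of the three blocks before training.
X = np.hstack([superapp, phoneline, bureau])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print("ROC AUC:", roc_auc_score(y_te, proba))

# Illustrative financial-cost view (amounts are made-up placeholders, not the paper's figures):
tn, fp, fn, tp = confusion_matrix(y_te, proba > 0.5).ravel()
cost = fn * 1000 + fp * 50   # a missed fraud is far more expensive than a false alarm
print("estimated cost:", cost)
```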
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
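A minimal PyTorch sketch of the idea of mapping WiFi channel measurements to dense-pose outputs: amplitude and phase tensors pass through a small convolutional encoder, with one head producing body-part segmentation logits and another producing per-part UV coordinates for 24 regions. Shapes and layers are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class WiFiDensePoseNet(nn.Module):
    """Illustrative sketch (not the paper's network): map WiFi CSI amplitude and phase
    to per-pixel body-part logits and UV coordinates for 24 body regions."""
    def __init__(self, n_antennas=3, n_subcarriers=30, out_hw=(56, 56), n_parts=24):
        super().__init__()
        in_ch = 2 * n_antennas           # amplitude + phase channels per antenna
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(out_hw),
        )
        self.part_head = nn.Conv2d(128, n_parts + 1, 1)   # background + 24 body parts
        self.uv_head   = nn.Conv2d(128, 2 * n_parts, 1)   # (u, v) per part

    def forward(self, amplitude, phase):
        x = torch.cat([amplitude, phase], dim=1)          # (B, 2*antennas, subcarriers, T)
        feat = self.encoder(x)
        return self.part_head(feat), torch.sigmoid(self.uv_head(feat))

# Dummy CSI batch: 3 antennas, 30 subcarriers, 100 time samples.
amp, pha = torch.randn(2, 3, 30, 100), torch.randn(2, 3, 30, 100)
parts, uv = WiFiDensePoseNet()(amp, pha)
print(parts.shape, uv.shape)   # (2, 25, 56, 56) and (2, 48, 56, 56)
```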
Attention mechanisms form a core component of several successful deep learning architectures, and are based on one key idea: "The output depends only on a small (but unknown) segment of the input." In several practical applications like image captioning and language translation, this is mostly true. In trained models with an attention mechanism, the outputs of an intermediate module that encodes the segment of input responsible for the output is often used as a way to peek into the `reasoning` of the network. We make such a notion more precise for a variant of the classification problem that we term selective dependence classification (SDC) when used with attention model architectures. Under such a setting, we demonstrate various error modes where an attention model can be accurate but fail to be interpretable, and show that such models do occur as a result of training. We illustrate various situations that can accentuate and mitigate this behaviour. Finally, we use our objective definition of interpretability for SDC tasks to evaluate a few attention model learning algorithms designed to encourage sparsity and demonstrate that these algorithms help improve interpretability.
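A toy sketch of the kind of check the paper formalises: on a synthetic task where the label depends on a single input position, train a small attention classifier and measure both its accuracy and how often the attention peak coincides with the truly relevant position. The task construction and model below are illustrative, not the SDC definition or the architectures studied in the paper.

```python
import torch
import torch.nn as nn

class AttnClassifier(nn.Module):
    """Minimal additive-attention classifier: attention weights are later inspected
    as the model's 'reasoning' about which position the label depends on."""
    def __init__(self, d=16, n_classes=4):
        super().__init__()
        self.score = nn.Linear(d, 1)      # attention scorer
        self.cls = nn.Linear(d, n_classes)
    def forward(self, x):                 # x: (B, T, d)
        attn = torch.softmax(self.score(x).squeeze(-1), dim=1)   # (B, T)
        pooled = (attn.unsqueeze(-1) * x).sum(dim=1)             # attention-weighted pooling
        return self.cls(pooled), attn

# Toy selective-dependence task (illustrative): only the position with the largest
# first feature carries the class-defining signal.
torch.manual_seed(0)
B, T, d, C = 512, 10, 16, 4
x = torch.randn(B, T, d)
relevant = x[..., 0].argmax(dim=1)                     # ground-truth relevant position
y = x[torch.arange(B), relevant, 1:5].argmax(dim=1)    # label stored in that position

model = AttnClassifier(d, C)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    logits, attn = model(x)
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()

logits, attn = model(x)
accuracy = (logits.argmax(dim=1) == y).float().mean()
# Interpretability check: does the attention peak coincide with the truly relevant position?
interpretability = (attn.argmax(dim=1) == relevant).float().mean()
print(f"accuracy={accuracy:.2f}  attention-matches-relevant={interpretability:.2f}")
```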